Final Jeopardy: Man vs. Machine and the Quest to Know Everything by Stephen Baker

Author: Stephen Baker
Language: eng
Format: mobi
Tags: Jeopardy! (Television Program), Database Management, Semantic Computing, Artificial Intelligence, Natural Language Processing (Computer Science), Watson (Computer)
ISBN: 9780547483160
Publisher: Houghton Mifflin Harcourt
Published: 2011-02-16T23:00:00+00:00


Cloistered in a refrigerated room on the third floor of the Hawthorne labs stood another version of Watson. It turned out that the team needed two Watsons: the game player, engineered for speed, and this slower, steadier, and more forgiving system for development. The speedy Watson, its algorithms deployed across more than 2,000 processors, was a finicky beast and near impossible to tinker with. This slower Watson kept running while developers rewrote certain instructions, swapped out one algorithm for another, or refined its betting strategy. It took forty minutes to run a batch of questions, but it could handle two hundred at a time. Unlike the fast machine, it created meticulous records, and it permitted researchers to experiment, section by section, with its answering process. Because the team could fiddle with the slower machine, it was always up-to-date, usually a month or two ahead of its speedy sibling. After the debacle against Lindsay, IBM could only hope that the slower, smarter Watson wouldn’t have been so confused.

Within twenty-four hours, Ferrucci’s team had run all of that day’s games on the slow machine. The news was encouraging. It performed 10 percent better on the clues. The biggest difference, according to Eric Brown, was that some of the clues were topical, and speedy Watson’s most recent data came from 2008. “We got creamed on a couple categories that required much more current information,” he said.

Other recent adjustments in the slow Watson helped it deal with chronology. Keeping track of facts as they change over time is a chronic problem for AI systems, and Watson was no exception. In the recent sparring session, it had mistaken a mid-nineteenth-century novel for a late-twentieth-century pop duo. Yet when Ferrucci analyzed the slower Watson’s performance on the problematic Oliver Twist clue, he was relieved to see that a recent tweak had helped the machine match the clue to the right century. This fix in “temporal reasoning” pushed the Pet Shop Boys answer way down its list, from first to number 79. Watson’s latest top answer—“What is magician?”—was still wrong but not as laughable. “It still knows nothing about Oliver Twist,” Ferrucci wrote in a late-night e-mail.

While Ferrucci and a handful of team members attended every sparring match in the winter of 2010, Jennifer Chu-Carroll generally stayed away. For her, their value was in the data they produced, not the spectacle, and much less the laughs. As she saw it, the team had a long list of improvements to make before autumn. By that point, the immense collection of software running Watson would be locked down—frozen. After that, the only tinkering would be in a few peripheral applications, like game strategy. But the central operations of the computer, like those of other mission-critical systems, would go through little but testing during the months leading up to the Jeopardy showdown. Engineers didn’t dare tinker with Space Shuttle software once the vessel was headed toward the launch pad. Watson would get similar treatment.

With each sparring session, however, the list of fixes was getting longer.


